Perceptron Learning with Discrete Weights

Authors

  • Joaquim Marques de Sá
  • Carlos S. Felgueiras
Abstract

Perceptron learning bounds with real weights have been presented by several authors. In the present paper we study the perceptron learning task when using integer weights in [−k, k]. We present a sample complexity formula based on an exact counting result of the finite class of functions implemented by the perceptron, and show that this bound is less pessimistic than existing bounds for the discrete and, under certain conditions, also for the continuous weight cases.
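The abstract's key ingredient is that a perceptron with integer weights in [−k, k] implements only a finite class of functions, which can be counted exactly. The paper derives a closed-form counting result; as a minimal illustration only, the class size can be found by brute-force enumeration at toy scale (the function name and the ≥ 0 threshold convention below are assumptions, not from the paper):

```python
from itertools import product

def count_perceptron_functions(n, k):
    """Brute-force count of distinct Boolean functions on {0,1}^n
    realized by a threshold unit with integer weights and bias in
    [-k, k]. Illustrative only: the paper derives an exact counting
    formula rather than enumerating."""
    inputs = list(product([0, 1], repeat=n))
    seen = set()
    levels = range(-k, k + 1)
    for wb in product(levels, repeat=n + 1):  # n weights plus a bias
        *w, b = wb
        # threshold unit: output 1 iff w.x + b >= 0
        f = tuple(int(sum(wi * xi for wi, xi in zip(w, x)) + b >= 0)
                  for x in inputs)
        seen.add(f)
    return len(seen)
```

The finite count is what lets a sample complexity bound for this class avoid VC-dimension-style overcounting; enumeration is exponential in n and only feasible for tiny cases.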


Similar Articles

Maximizing the Robustness of a Linear Threshold Classifier with Discrete Weights

Quantization of the parameters of a Perceptron is a central problem in hardware implementation of neural networks using a numerical technology. An interesting property of neural networks used as classifiers is their ability to provide some robustness to input noise. This paper presents efficient learning algorithms for the maximization of the robustness of a Perceptron and especially designed to t...

Full text

Training a perceptron in a discrete weight space.

Learning in a perceptron having a discrete weight space, where each weight can take 2L+1 different values, is examined analytically and numerically. The learning algorithm is based on the training of the continuous perceptron and prediction following the clipped weights. The learning is described by a new set of order parameters, composed of the overlaps between the teacher and the continuous/c...
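The scheme summarized above trains a continuous perceptron while predicting with the weights clipped to the 2L+1 discrete levels. A minimal sketch of that idea, with assumed details (learning rate, epoch count, rounding to the nearest of the integer levels −L..L) that are illustrative rather than taken from the paper:

```python
import numpy as np

def train_clipped_perceptron(X, y, L=1, epochs=100, lr=0.1):
    """Sketch of continuous training with clipped-weight prediction:
    the usual perceptron rule updates the continuous weights, while
    the returned discrete weights are rounded and saturated to the
    2L+1 levels -L..L. Labels y are assumed to be +/-1."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            # standard perceptron update on the *continuous* weights
            pred = 1 if w @ x >= 0 else -1
            if pred != t:
                w += lr * t * x
    # clip: round to the nearest integer level, saturate at +/-L
    w_clipped = np.clip(np.rint(w), -L, L)
    return w, w_clipped
```

Note that the clipped weights need not reproduce the continuous perceptron's predictions; the paper's order-parameter analysis describes exactly this overlap between teacher and continuous/clipped student.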

Full text

A "Thermal" Perceptron Learning Rule

The thermal perceptron is a simple extension to Rosenblatt’s perceptron learning rule for training individual linear threshold units. It finds stable weights for nonseparable problems as well as separable ones. Experiments indicate that if a good initial setting for a temperature parameter, T0, has been found, then the thermal perceptron outperforms the Pocket algorithm and methods based on gra...

Full text

How to keep the HG weights non-negative: the truncated Perceptron reweighing rule

The literature on error-driven learning in Harmonic Grammar (HG) has adopted the Perceptron reweighing rule. Yet, this rule is not suited to HG, as it fails at ensuring non-negative weights. A variant is thus considered which truncates the updates at zero, keeping the weights non-negative. Convergence guarantees and error bounds for the original Perceptron are shown to extend to its truncated v...
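The truncation the abstract describes is a one-line change to the additive update: apply the usual perceptron reweighing, then clamp each weight at zero so the Harmonic Grammar weights stay non-negative. A minimal sketch (the function and argument names are illustrative, not from the paper):

```python
def truncated_perceptron_update(weights, error_signal, lr=1.0):
    """Sketch of a truncated perceptron update: perform the usual
    additive perceptron reweighing, then truncate each resulting
    weight at zero so all weights remain non-negative."""
    return [max(0.0, w + lr * e) for w, e in zip(weights, error_signal)]
```

A weight that the plain rule would drive negative is simply held at zero instead; the abstract's point is that the original Perceptron convergence guarantees and error bounds survive this truncation.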

Full text

A Boosted Semi-Markov Perceptron

This paper proposes a boosting algorithm that uses a semi-Markov perceptron. The training algorithm repeats the training of a semi-Markov model and the update of the weights of training samples. In the boosting, training samples that are incorrectly segmented or labeled have large weights. Such training samples are aggressively learned in the training of the semi-Markov perceptron because the w...

Full text



Journal title:

Volume   Issue

Pages  -

Publication date: 2005